Goal Accuracy


Designing Tools with Control Confidence

Meera, Ajith Anil, Torres, Abian, Lanillos, Pablo

arXiv.org Artificial Intelligence

Prehistoric humans invented stone tools for specialized tasks not just by maximizing the tool's immediate goal-completion accuracy, but also by increasing their confidence in the tool for later use under similar settings. This factor contributed to the increased robustness of the tool, i.e., the smallest performance deviations under environmental uncertainties. However, current autonomous tool design frameworks rely solely on performance optimization, without considering the agent's confidence in the tool for repeated use. Here, we take a step towards filling this gap by i) defining an optimization framework for task-conditioned autonomous hand-tool design for robots, where ii) we introduce a neuro-inspired control confidence term into the optimization routine that helps the agent design tools with higher robustness. Through rigorous simulations using a robotic arm, we show that tools designed with control confidence as the objective function are more robust to environmental uncertainties during tool use than those designed with a purely accuracy-driven objective. We further show that adding control confidence to the objective function for tool design provides a balance between the robustness and goal accuracy of the designed tools under control perturbations. Finally, we show that our CMA-ES-based evolutionary optimization strategy for autonomous tool design outperforms other state-of-the-art optimizers by designing the optimal tool within the fewest iterations. Code: https://github.com/ajitham123/Tool_design_control_confidence.
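The accuracy-plus-confidence objective can be sketched with a simple evolutionary loop. The snippet below is a minimal illustration, not the paper's CMA-ES implementation: the (mu, lambda)-style strategy, the toy quadratic goal error, and the regularizer standing in for control confidence are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def design_tool(objective, dim=3, pop=20, elite=5, iters=100, seed=0):
    """Simplified (mu, lambda) evolution strategy: sample candidate tool
    parameters, keep the elite, and re-center the search around their mean."""
    rng = np.random.default_rng(seed)
    mean, sigma = np.zeros(dim), 1.0
    for _ in range(iters):
        cands = mean + sigma * rng.standard_normal((pop, dim))
        scores = np.array([objective(c) for c in cands])
        elites = cands[np.argsort(scores)[:elite]]  # lower score is better
        mean = elites.mean(axis=0)
        sigma *= 0.95  # shrink the search distribution
    return mean

# Hypothetical objective: goal-completion error plus a term standing in
# for (lack of) control confidence, weighted by lam.
def objective(tool_params, target=np.array([1.0, -0.5, 0.3]), lam=0.1):
    goal_error = np.sum((tool_params - target) ** 2)
    confidence_penalty = lam * np.sum(tool_params ** 2)
    return goal_error + confidence_penalty

best = design_tool(objective)
```

The confidence weight `lam` trades goal accuracy for robustness: the minimizer of this toy objective sits between the pure-accuracy optimum and the most conservative (zero) design.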


Probing the Robustness of Theory of Mind in Large Language Models

Nickel, Christian, Schrewe, Laura, Flek, Lucie

arXiv.org Artificial Intelligence

With the success of ChatGPT and other similarly sized SotA LLMs, claims of emergent human-like social reasoning capabilities, especially Theory of Mind (ToM), in these models have appeared in the scientific literature. On the one hand, those ToM capabilities have been successfully tested using tasks styled after those used in psychology (Kosinski, 2023). On the other hand, follow-up studies showed that those capabilities vanished when the tasks were slightly altered (Ullman, 2023). In this work we introduce a novel dataset of 68 tasks for probing ToM in LLMs, including potentially challenging variations assigned to 10 complexity classes, thereby providing novel insights into the challenges LLMs face with those task variations. We evaluate the ToM performance of four SotA open-source LLMs on our dataset and on the dataset introduced by Kosinski (2023). The overall low goal accuracy across all evaluated models indicates only a limited degree of ToM capabilities. The LLMs' performance on simple complexity-class tasks from both datasets is similar. We find a consistent tendency in all tested LLMs to perform poorly on tasks that require realizing that an agent has knowledge of automatic state changes in its environment, even when those are spelled out to the model. For task complications that change the relationship between objects by replacing prepositions, we observe a performance drop in all models, with the strongest impact on the mixture-of-experts model. With our dataset of tasks grouped by complexity, we offer directions for further research on how to stabilize and advance ToM capabilities in LLMs.


Granular Change Accuracy: A More Accurate Performance Metric for Dialogue State Tracking

Aksu, Taha, Chen, Nancy F.

arXiv.org Artificial Intelligence

Current metrics for evaluating dialogue state tracking (DST) suffer from three shortcomings. They: i) erroneously presume a uniform distribution of slots throughout the dialogue, ii) neglect to assign partial scores for individual turns, and iii) frequently overestimate or underestimate performance by repeatedly counting the model's successful or failed predictions. To address these shortcomings, we introduce a novel metric: Granular Change Accuracy (GCA). GCA focuses on evaluating the predicted changes in dialogue state over the entire dialogue history. Benchmarking reveals that GCA effectively reduces biases arising from distribution uniformity and the positioning of errors across turns, resulting in a more precise evaluation. Notably, we find that these biases are particularly pronounced when evaluating few-shot or zero-shot trained models, becoming even more evident as the model's error rate increases. Hence, GCA offers significant promise, particularly for assessing models trained with limited resources. Our GCA implementation is a useful addition to the pool of DST metrics.
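To illustrate the kind of bias GCA targets, the sketch below contrasts all-or-nothing joint goal accuracy with a simple change-based score that credits per-turn state updates. The change-based score is an illustrative stand-in, not the paper's exact GCA formula: one early slot error is re-counted at every subsequent turn by joint goal accuracy, but only once by the change-based score.

```python
def joint_goal_accuracy(pred_states, gold_states):
    """All-or-nothing per turn: a turn counts only if every slot matches."""
    correct = sum(p == g for p, g in zip(pred_states, gold_states))
    return correct / len(gold_states)

def change_accuracy(pred_states, gold_states):
    """Score the per-turn *changes* to the state instead of the full
    accumulated state, so one persistent error is counted only once."""
    scores, prev_p, prev_g = [], {}, {}
    for p, g in zip(pred_states, gold_states):
        gold_changes = {k: v for k, v in g.items() if prev_g.get(k) != v}
        pred_changes = {k: v for k, v in p.items() if prev_p.get(k) != v}
        if gold_changes or pred_changes:
            hits = sum(pred_changes.get(k) == v for k, v in gold_changes.items())
            scores.append(hits / max(len(gold_changes), len(pred_changes)))
        prev_p, prev_g = p, g
    return sum(scores) / len(scores) if scores else 1.0

# One early error (slot "a" mispredicted) persists across both turns:
# joint goal accuracy punishes it twice (score 0.0), the change-based
# score only once (score 0.5).
gold = [{"a": "x"}, {"a": "x", "b": "y"}]
pred = [{"a": "z"}, {"a": "z", "b": "y"}]
```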


Contextual Data Augmentation for Task-Oriented Dialog Systems

Axman, Dustin, Ray, Avik, Garg, Shubham, Huang, Jing

arXiv.org Artificial Intelligence

Virtual assistants (e.g., Alexa, Siri, Google Assistant) are able to accomplish various tasks by interacting with users via natural language conversation. Task-oriented dialog models form the core technology behind these applications: they understand users' natural language utterances [1, 2], keep track of the conversation [3, 4], perform requested tasks (e.g., API calls) [5, 6], and generate appropriate, meaningful responses to the user [7, 8]. Training neural task-oriented dialog models [9, 10, 11] requires a large amount of annotated data, which is difficult for model developers to obtain. While crowd-sourcing and dialog simulation based on agent interplay [12, 13] address this issue to a certain extent, these approaches are slow and do not provide sufficient coverage of the surface-form variations of natural language (NL) user turns. Recently, large pre-trained language models (e.g., GPT-2 [14], T5 [15]) have been successfully used to generate fluent agent dialog responses, both with dialog context [16, 8, 17] and without it [18, 19]. However, it is unclear whether similar models can capture the large variation of the user-turn distribution in such task-oriented dialogs. Previous work on data augmentation for spoken language understanding has largely focused on generating paraphrases of user utterances with a specific goal and set of entities [20, 21, 22]. However, such utterances again fail to provide sufficient coverage of the large semantic space possible between dialog turns, and may not improve the performance of downstream task-oriented dialog systems.


Choice Fusion as Knowledge for Zero-Shot Dialogue State Tracking

Su, Ruolin, Yang, Jingfeng, Wu, Ting-Wei, Juang, Biing-Hwang

arXiv.org Artificial Intelligence

With the demanding need for deploying dialogue systems in new domains at less cost, zero-shot dialogue state tracking (DST), which tracks users' requirements in task-oriented dialogues without training on the desired domains, is drawing increasing attention. Nowadays, the requirement of deploying an increasing number of services across a variety of domains raises challenges to DST models in production [4]. However, existing dialogue datasets only span a few domains, making it impossible to train a DST model on all conceivable conversation flows [5]. Furthermore, dialogue systems are required to infer dialogue states with dynamic techniques and offer diverse interfaces for different services. Despite the fact that the copy mechanism [6] or dialogue acts [7] are leveraged to efficiently track slots and values in the dialogue history, the performance of DST still relies on a large number of annotations of dialogue states, which are expensive and inefficient to collect for every new domain and service. Although prior works have leveraged question-answering (QA) data to reduce the need for in-domain training in DST, they fail to explicitly model knowledge transfer and fusion for tracking dialogue states. To address this issue, we propose CoFunDST, which is trained on domain-agnostic QA datasets and directly uses candidate choices of slot-values as knowledge for zero-shot dialogue-state generation.


Oh My Mistake!: Toward Realistic Dialogue State Tracking including Turnback Utterances

Kim, Takyoung, Lee, Yukyung, Yoon, Hoonsang, Kang, Pilsung, Bang, Junseong, Kim, Misuk

arXiv.org Artificial Intelligence

The primary purpose of dialogue state tracking (DST), a critical component of an end-to-end conversational system, is to build a model that responds well to real-world situations. Although we often change our minds during ordinary conversations, current benchmark datasets do not adequately reflect such occurrences and instead consist of over-simplified conversations in which no one changes their mind. The main question inspiring the present study is: "Are current benchmark datasets sufficiently diverse to handle casual conversations in which one changes their mind after a certain topic is over?" We found that the answer is "No", because DST models cannot refer to previous user preferences when template-based turnback utterances are injected into the dataset. Even in the simplest mind-changing (turnback) scenario, the performance of DST models degraded significantly. However, we found that this performance degradation can be recovered when turnback scenarios are explicitly designed into the training set, implying that the problem lies not with the DST models but rather with the construction of the benchmark dataset.
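Injecting a template-based turnback utterance can be sketched as follows; the templates and the dictionary-based dialogue representation are hypothetical illustrations, not taken from the paper's dataset.

```python
# Hypothetical turnback templates; the paper's exact wording is not reproduced.
TEMPLATES = [
    "Oh, my mistake! I actually want {value} for the {slot}.",
    "Wait, let's go back to {value} for the {slot}.",
]

def inject_turnback(dialogue, slot, revert_value, template_id=0):
    """Append a turnback user turn that reverts `slot` to an earlier
    value, and update the expected dialogue state accordingly."""
    utterance = TEMPLATES[template_id].format(slot=slot, value=revert_value)
    new_state = dict(dialogue["state"])
    new_state[slot] = revert_value
    return {"turns": dialogue["turns"] + [("user", utterance)],
            "state": new_state}

dlg = {"turns": [("user", "Book a cheap hotel."),
                 ("user", "Make it expensive instead.")],
       "state": {"hotel-price": "expensive"}}
probe = inject_turnback(dlg, "hotel-price", "cheap")
```

A DST model that only attends to the latest preference will keep predicting "expensive" here, which is the failure mode the probe is designed to expose.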


CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking

Zhang, Haoning, Bao, Junwei, Sun, Haipeng, Luo, Huaishao, Li, Wenye, Cui, Shuguang

arXiv.org Artificial Intelligence

Few-shot dialogue state tracking (DST) is a realistic problem that trains the DST model with limited labeled data. Existing few-shot methods mainly transfer knowledge learned from external labeled dialogue data (e.g., from question answering, dialogue summarization, machine reading comprehension tasks, etc.) into DST, whereas collecting a large amount of external labeled data is laborious, and the external data may not effectively contribute to the DST-specific task. In this paper, we propose a few-shot DST framework called CSS, which Combines Self-training and Self-supervised learning methods. The unlabeled data of the DST task is incorporated into the self-training iterations, where the pseudo labels are predicted by a DST model trained on limited labeled data in advance. Besides, a contrastive self-supervised method is used to learn better representations, where the data is augmented by the dropout operation to train the model. Experimental results on the MultiWOZ dataset show that our proposed CSS achieves competitive performance in several few-shot scenarios.
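The self-training half of CSS can be sketched as a generic pseudo-labeling loop; the 1-D nearest-centroid toy model, the margin-based confidence score, and the threshold are illustrative assumptions standing in for a DST model trained on limited labels.

```python
def self_train(fit, predict, labeled, unlabeled, rounds=3, threshold=0.8):
    """Generic self-training loop: fit on labeled data, pseudo-label the
    confidently predicted unlabeled examples, fold them in, and repeat."""
    data = list(labeled)
    for _ in range(rounds):
        model = fit(data)
        pseudo = []
        for x in unlabeled:
            label, conf = predict(model, x)
            if conf >= threshold:
                pseudo.append((x, label))
        data = list(labeled) + pseudo
    return fit(data)

# Toy stand-in for a DST model: a 1-D nearest-centroid classifier whose
# confidence is the normalized margin between the two closest centroids.
def fit_centroids(data):
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    dists = sorted((abs(x - c), y) for y, c in model.items())
    margin = dists[1][0] - dists[0][0] if len(dists) > 1 else 1.0
    conf = margin / (margin + dists[0][0] + 1e-9)
    return dists[0][1], conf

labeled = [(0.0, "a"), (10.0, "b")]
unlabeled = [1.0, 2.0, 8.5, 9.0]
final = self_train(fit_centroids, predict, labeled, unlabeled, rounds=2)
```

The confidence threshold is the key knob: too low and noisy pseudo labels pollute training, too high and the unlabeled pool goes unused.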


Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking

Su, Ruolin, Wu, Ting-Wei, Juang, Biing-Hwang

arXiv.org Artificial Intelligence

As an essential component in task-oriented dialogue systems, dialogue state tracking (DST) aims to track human-machine interactions and generate state representations for managing the dialogue. Representations of dialogue states are dependent on the domain ontology and the user's goals. In several task-oriented dialogues with a limited scope of objectives, dialogue states can be represented as a set of slot-value pairs. As the capabilities of dialogue systems expand to support increasing naturalness in communication, incorporating dialogue act processing into dialogue model design becomes essential. The lack of such consideration limits the scalability of dialogue state tracking models for dialogues having specific objectives and ontology. To address this issue, we formulate and incorporate dialogue acts, and leverage recent advances in machine reading comprehension to predict both categorical and non-categorical types of slots for multi-domain dialogue state tracking. Experimental results show that our models can improve the overall accuracy of dialogue state tracking on the MultiWOZ 2.1 dataset, and demonstrate that incorporating dialogue acts can guide dialogue state design for future task-oriented dialogue systems.


Slot Self-Attentive Dialogue State Tracking

Ye, Fanghua, Manotumruksa, Jarana, Zhang, Qiang, Li, Shenghui, Yilmaz, Emine

arXiv.org Artificial Intelligence

An indispensable component in task-oriented dialogue systems is the dialogue state tracker, which keeps track of users' intentions in the course of conversation. The typical approach towards this goal is to fill in multiple pre-defined slots that are essential to complete the task. Although various dialogue state tracking methods have been proposed in recent years, most of them predict the value of each slot separately and fail to consider the correlations among slots. In this paper, we propose a slot self-attention mechanism that can learn the slot correlations automatically. Specifically, a slot-token attention is first utilized to obtain slot-specific features from the dialogue context. Then a stacked slot self-attention is applied on these features to learn the correlations among slots. We conduct comprehensive experiments on two multi-domain task-oriented dialogue datasets, including MultiWOZ 2.0 and MultiWOZ 2.1. The experimental results demonstrate that our approach achieves state-of-the-art performance on both datasets, verifying the necessity and effectiveness of taking slot correlations into consideration.
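The two attention stages can be sketched in plain NumPy; the dimensions and random features are placeholders, and the single-head, unparameterized attention is a simplification of the paper's stacked, learned layers.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Single-head scaled dot-product attention."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d = 8
tokens = rng.standard_normal((12, d))  # dialogue-context token features
slots = rng.standard_normal((4, d))    # one query vector per slot

# Stage 1: slot-token attention extracts slot-specific context features.
slot_feats = attend(slots, tokens, tokens)
# Stage 2: slot self-attention lets slot features condition on each other,
# modeling the correlations among slots.
slot_feats = attend(slot_feats, slot_feats, slot_feats)
```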


Improving Limited Labeled Dialogue State Tracking with Self-Supervision

Wu, Chien-Sheng, Hoi, Steven, Xiong, Caiming

arXiv.org Artificial Intelligence

Existing dialogue state tracking (DST) models require plenty of labeled data. However, collecting high-quality labels is costly, especially when the number of domains increases. In this paper, we address a practical DST problem that is rarely discussed, i.e., learning efficiently with limited labeled data. We present and investigate two self-supervised objectives: preserving latent consistency and modeling conversational behavior. We encourage a DST model to have consistent latent distributions given a perturbed input, making it more robust to unseen scenarios. We also add an auxiliary utterance generation task, modeling a potential correlation between conversational behavior and dialogue states. The experimental results show that our proposed self-supervised signals can improve joint goal accuracy by 8.95% when only 1% of labeled data is used on the MultiWOZ dataset. We can achieve an additional 1.76% improvement if some unlabeled data is jointly trained as in semi-supervised learning. We analyze and visualize how our proposed self-supervised signals help the DST task and hope to stimulate future data-efficient DST research.
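The latent-consistency objective can be sketched as follows; the toy linear encoder, dropout perturbation, and mean-squared distance (used here in place of a divergence between latent distributions) are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def encode(x, w, mask):
    """Toy encoder: a linear layer where dropout is the input perturbation."""
    return np.tanh((x * mask) @ w)

def consistency_loss(x, w, rng, n_views=2, p_drop=0.1):
    """Mean squared distance between the latents of independently
    perturbed views; minimizing it favors perturbation-invariant states."""
    views = []
    for _ in range(n_views):
        mask = (rng.random(x.shape) > p_drop) / (1.0 - p_drop)
        views.append(encode(x, w, mask))
    diffs = [np.mean((views[i] - views[j]) ** 2)
             for i in range(n_views) for j in range(i + 1, n_views)]
    return float(np.mean(diffs))

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
w = rng.standard_normal((16, 8))
loss = consistency_loss(x, w, rng)
```

Added to the supervised DST loss, a term like this penalizes encoders whose latent state changes under small input perturbations, which is the robustness property the abstract describes.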